# English NLP

## Neobert GGUF
MIT · mradermacher · 219 downloads · 1 like
A static quantized version of the chandar-lab/NeoBERT model, intended to reduce storage and compute requirements.
Tags: Large Language Model, Transformers, English
## Seed Coder 8B Instruct GGUF
MIT · ZeroWw · 434 downloads · 1 like
A self-quantized build: the output and embedding tensors are kept in f16 while the remaining tensors are quantized to q5_k or q6_k, yielding a smaller file with performance comparable to pure f16 (see the sketch below).
Tags: Large Language Model, English
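
Quantized GGUF files like this one run outside of `transformers`, through llama.cpp bindings. Below is a minimal loading sketch using llama-cpp-python; the `.gguf` filename is hypothetical, so substitute the actual file shipped in the repo.

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python).
# The .gguf filename below is hypothetical -- use the actual file from the repo.
from llama_cpp import Llama

llm = Llama(model_path="seed-coder-8b-instruct.q5_k.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}]
)
print(out["choices"][0]["message"]["content"])
```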
## Falcon H1 0.5B Base
Other · tiiuae · 485 downloads · 10 likes
Falcon-H1 is a decoder-only causal language model from TII with a hybrid Transformer + Mamba architecture, delivering strong performance on English NLP tasks (see the sketch below).
Tags: Large Language Model, Transformers
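
A minimal loading sketch with `transformers` (recent releases include Falcon-H1 support); the repo id `tiiuae/Falcon-H1-0.5B-Base` is inferred from the listed name and author.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon-H1-0.5B-Base"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```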
## T0 3B
Apache-2.0 · bigscience · 3,723 downloads · 100 likes
T0 is a natural language processing model based on the T5 architecture that achieves zero-shot task generalization through multitask prompted training, outperforming GPT-3 on many NLP tasks while being far smaller (see the sketch below).
Tags: Large Language Model, Transformers, English
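
T0 is driven by phrasing the task as a natural-language prompt. The sketch below follows the pattern from the published model card; the repo id `bigscience/T0_3B` is assumed from the listing.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")

# Zero-shot sentiment classification, posed as a plain-language question.
prompt = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```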
## Olmo 2 0425 1B SFT
Apache-2.0 · allenai · 1,759 downloads · 2 likes
OLMo 2 1B SFT is a supervised fine-tuned version of the OLMo-2-0425-1B model, trained on the Tulu 3 dataset and designed to achieve state-of-the-art performance across multiple tasks.
Tags: Large Language Model, Transformers, English
## Falcon E 1B Base
Other · tiiuae · 53 downloads · 4 likes
Falcon-E-1B-Base is an efficient 1.58-bit language model developed by TII, featuring a pure Transformer architecture optimized for edge devices.
Tags: Large Language Model, Transformers
## Bert Base Uncased Finetuned Rte Run Trial3
Apache-2.0 · BoranIsmet · 59 downloads · 1 like
A bert-base-uncased model fine-tuned for recognizing textual entailment (RTE), reaching 66.43% accuracy.
Tags: Text Classification, Transformers
## Bonsai
deepgrove · 113 downloads · 8 likes
Bonsai is a small ternary-weight language model with 500 million parameters, built on the Llama architecture with the Mistral tokenizer and trained on fewer than 5 billion tokens.
Tags: Large Language Model, Transformers
## Ro001
Apache-2.0 · jiyometrik · 23 downloads · 1 like
A text classification model fine-tuned from distilbert-base-uncased, achieving an F1 score of 0.6147.
Tags: Large Language Model, Transformers
## Llama 3.1 Tulu 3.1 8B
allenai · 3,643 downloads · 33 likes
Tülu 3 is a leading family of instruction-following models, with fully open data, code, and training recipes that serve as a comprehensive guide to modern post-training techniques. Version 3.1 improves the reinforcement-learning stage, delivering better overall performance (see the sketch below).
Tags: Large Language Model, Transformers, English
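
A chat sketch via the `transformers` text-generation pipeline; the repo id `allenai/Llama-3.1-Tulu-3.1-8B` is inferred from the listing, and passing chat messages directly to the pipeline requires a recent transformers release.

```python
from transformers import pipeline

chat = pipeline("text-generation", model="allenai/Llama-3.1-Tulu-3.1-8B")  # assumed repo id
messages = [{"role": "user", "content": "Give me three tips for writing clear bug reports."}]
result = chat(messages, max_new_tokens=256)
# With chat-message input, generated_text is the full message list; take the reply.
print(result[0]["generated_text"][-1]["content"])
```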
## Stella En 400M V5
MIT · billatsectorflow · 7,630 downloads · 3 likes
Stella 400M v5 is an English text embedding model that performs strongly across text classification and retrieval tasks (see the sketch below).
Tags: Large Language Model, Transformers, Other
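
An embedding-and-retrieval sketch with sentence-transformers. Both the repo id (`dunzhang/stella_en_400M_v5`, the upstream that this listing appears to mirror) and the `s2p_query` prompt name follow the Stella model card, but treat them as assumptions.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dunzhang/stella_en_400M_v5", trust_remote_code=True)  # assumed repo id

queries = ["What is the capital of France?"]
docs = ["Paris is the capital and largest city of France."]

# Stella prepends an instruction prompt to queries but encodes documents directly.
q_emb = model.encode(queries, prompt_name="s2p_query")
d_emb = model.encode(docs)
print(model.similarity(q_emb, d_emb))
```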
## Distilbert Emotion
Apache-2.0 · asimmetti · 32 downloads · 1 like
A sentiment analysis model fine-tuned from distilbert-base-uncased, achieving 94% accuracy on the evaluation set (see the sketch below).
Tags: Text Classification, Transformers
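
A classification sketch via the `transformers` pipeline; the repo id `asimmetti/distilbert-emotion` is a guess assembled from the listed name and author.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="asimmetti/distilbert-emotion")  # assumed repo id
print(classifier("I can't believe how well this worked!"))
# e.g. [{'label': 'joy', 'score': 0.98}] -- the label set depends on the fine-tuning dataset
```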
## EMOTION AI
Apache-2.0 · Hemg · 20 downloads · 1 like
A sentiment analysis model based on DistilBERT, fine-tuned on an unknown dataset, with an accuracy of 56.16%.
Tags: Text Classification, Transformers
## Modernbert Large Zeroshot V1
MIT · r-f · 54 downloads · 2 likes
A natural language inference model fine-tuned from ModernBERT-large, designed for zero-shot classification tasks (see the sketch below).
Tags: Text Classification, Transformers, English
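
NLI-based models like this one plug into the `zero-shot-classification` pipeline, which scores arbitrary candidate labels against the input; the repo id below is inferred from the listing.

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="r-f/ModernBERT-large-zeroshot-v1")  # assumed repo id
result = classifier(
    "The new graphics card doubles frame rates in most games.",
    candidate_labels=["technology", "sports", "politics"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```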
## News Category Classifier
ranudee · 526 downloads · 1 like
A lightweight text classification model based on the DistilBERT architecture, designed to classify English news into four major categories: World, Sports, Business, and Technology.
Tags: Text Classification, Transformers
## Dunzhang Stella En 400M V5
MIT · Marqo · 17.20k downloads · 7 likes
Stella 400M is a medium-scale English text processing model focused on classification and information retrieval tasks.
Tags: Text Classification, Transformers, Other
## Meta Llama 3.1 8B Instruct Abliterated GGUF
MIT · ZeroWw · 98 downloads · 17 likes
A text generation model using mixed quantization: output and embedding tensors are stored in f16 while the other tensors use q5_k or q6_k, making it smaller than a standard q8_0 quantization while performing comparably to pure f16.
Tags: Large Language Model, English
## Clinical T5
Apache-2.0 · hossboll · 589 downloads · 0 likes
A clinical note summarization model fine-tuned from T5-small (see the sketch below).
Tags: Text Generation, Transformers, English
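
A summarization sketch; the repo id `hossboll/clinical-t5` is an assumption based on the listing, and the clinical note is invented.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="hossboll/clinical-t5")  # assumed repo id
note = (
    "Patient is a 64-year-old male admitted with chest pain radiating to the left arm. "
    "ECG showed ST elevation; troponin elevated. Started on aspirin and heparin."
)
print(summarizer(note, max_length=40, min_length=10)[0]["summary_text"])
```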
## Roberta Base Nli
kwang123 · 18 downloads · 1 like
A natural language inference model based on the RoBERTa architecture, fine-tuned specifically for depression detection.
Tags: Text Classification, Transformers, English
## Distilbert Topic Abstract Classification
Apache-2.0 · Eitanli · 43 downloads · 1 like
A text classification model fine-tuned from distilbert-base-uncased for classifying abstracts by topic.
Tags: Text Classification, Transformers
## Mental Alpaca
NEU-HAI · 180 downloads · 9 likes
A fine-tuned large language model for mental health prediction from online text data.
Tags: Large Language Model, Transformers, English
## Gpt2 Emotion
MIT · heegyu · 76 downloads · 2 likes
An English emotional text generation model based on the GPT-2 architecture, supporting six basic emotion categories as generation conditions.
Tags: Large Language Model, Transformers, English
## Instructor Large
Apache-2.0 · hkunlp · 186.12k downloads · 508 likes
INSTRUCTOR is a text embedding model based on the T5 architecture, focused on sentence similarity and text classification for English; each embedding is conditioned on a task instruction (see the sketch below).
Tags: Text Embedding, Transformers, English
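
INSTRUCTOR's defining feature is that each input is an [instruction, text] pair, so one model can produce task-specific embeddings. The sketch follows the pattern documented in the hkunlp/instructor-large model card (requires `pip install InstructorEmbedding sentence-transformers`).

```python
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR("hkunlp/instructor-large")

# Each item pairs a task instruction with the text to embed.
embeddings = model.encode([
    ["Represent the Science title:", "Parton energy loss in QCD matter"],
    ["Represent the Finance statement:", "Quarterly revenue grew 12% year over year"],
])
print(embeddings.shape)  # (2, 768) -- instructor-large produces 768-dim vectors
```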
## Distilbert Base Uncased Finetuned Squad
Apache-2.0 · pfsv · 15 downloads · 1 like
A fine-tuned version of DistilBERT on the SQuAD question-answering dataset, designed for QA tasks (see the sketch below).
Tags: Question Answering System, Transformers
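
All of the SQuAD fine-tunes in this list are extractive QA models with the same `question-answering` pipeline interface. The sketch uses the canonical `distilbert-base-uncased-distilled-squad` checkpoint as a stand-in; any of the repos listed here should slot in the same way.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-uncased-distilled-squad")
result = qa(
    question="What was DistilBERT fine-tuned on?",
    context="DistilBERT was fine-tuned on the SQuAD question-answering dataset.",
)
print(result["answer"], result["score"])  # extracted span plus confidence
```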
## Distilbert Base Cased Finetuned Squadv2
Apache-2.0 · monakth · 13 downloads · 0 likes
A DistilBERT-based question-answering model fine-tuned on SQuAD v2, suitable for reading comprehension tasks.
Tags: Question Answering System, Transformers
## Distilbert Base Uncased Finetuned Squad
Apache-2.0 · shizil · 15 downloads · 0 likes
A lightweight question-answering model based on DistilBERT, fine-tuned on the SQuAD dataset.
Tags: Question Answering System, Transformers
## Med KEBERT
OpenRAIL · xmcmic · 769 downloads · 1 like
A BERT-based pre-trained language model for the biomedical domain, suited to processing biomedical text (see the sketch below).
Tags: Large Language Model, Transformers, English
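
As a BERT-style encoder it can be exercised with the `fill-mask` pipeline; the repo id `xmcmic/Med-KEBERT` is inferred from the listing.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="xmcmic/Med-KEBERT")  # assumed repo id
for pred in fill("Aspirin is commonly used to treat [MASK]."):
    print(pred["token_str"], pred["score"])
```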
## Deberta V3 Xsmall Squad2
Apache-2.0 · nlpconnect · 21 downloads · 0 likes
A question-answering model based on the DeBERTa-v3-xsmall architecture, fine-tuned on the SQuAD 2.0 dataset.
Tags: Question Answering System, Transformers
## Distilbert Base Uncased Finetuned Squad
Apache-2.0 · shila · 15 downloads · 0 likes
A DistilBERT-based question-answering model fine-tuned on SQuAD v2, suitable for reading comprehension tasks.
Tags: Question Answering System, Transformers
## Distilbert Base Uncased Finetuned Squad
Apache-2.0 · MMVos · 15 downloads · 0 likes
A question-answering model based on DistilBERT, fine-tuned on the SQuAD v2 dataset, suitable for QA tasks.
Tags: Question Answering System, Transformers
## Distilbert Base Uncased Becasv2 6
Apache-2.0 · Evelyn18 · 16 downloads · 0 likes
A fine-tuned version of distilbert-base-uncased on the becasv2 dataset, primarily used for text classification tasks.
Tags: Large Language Model, Transformers
## Distilbert Base Uncased Becasv2 1
Apache-2.0 · Evelyn18 · 16 downloads · 0 likes
A fine-tuned version of the distilbert-base-uncased model on the becasv2 dataset, primarily used for text-related tasks.
Tags: Large Language Model, Transformers
## Distilbert Base Uncased Finetuned Squad
Apache-2.0 · haddadalwi · 17 downloads · 0 likes
A question-answering model based on distilbert-base-uncased, fine-tuned on the squad_v2 dataset.
Tags: Question Answering System, Transformers
## Distilbert Base Uncased Becas 6
Apache-2.0 · Evelyn18 · 17 downloads · 0 likes
A fine-tuned version of distilbert-base-uncased on the becasv2 dataset, primarily used for text generation tasks.
Tags: Large Language Model, Transformers
## Distilbert Base Uncased Becas 4
Apache-2.0 · Evelyn18 · 20 downloads · 0 likes
A text classification model fine-tuned from distilbert-base-uncased on the becasv2 dataset.
Tags: Large Language Model, Transformers
## Distilbert Base Uncased Finetuned Squad
Apache-2.0 · ashhyun · 16 downloads · 0 likes
A fine-tuned version of the DistilBERT base model on the SQuAD question answering dataset, designed for question answering tasks.
Tags: Question Answering System, Transformers
## Roberta Base Finetuned Squad
MIT · janeel · 16 downloads · 0 likes
A RoBERTa-base question-answering model fine-tuned on SQuAD 2.0, designed to answer questions from a given passage.
Tags: Question Answering System, Transformers
## Distilbert Base Uncased Finetuned Squad
Apache-2.0 · anu24 · 16 downloads · 0 likes
A fine-tuned version of DistilBERT on the SQuAD Q&A dataset, suitable for question-answering tasks.
Tags: Question Answering System, Transformers
## Bert Base Uncased Squad V2.0 Finetuned
Apache-2.0 · kamalkraj · 84 downloads · 0 likes
A fine-tuned version of bert-base-uncased on the squad_v2 dataset, suitable for question-answering tasks.
Tags: Question Answering System, Transformers
## Distilbert Base Uncased Finetuned Squad
Apache-2.0 · ak987 · 15 downloads · 0 likes
A DistilBERT-based Q&A model fine-tuned on the SQuAD dataset for reading comprehension tasks.
Tags: Question Answering System, Transformers